Nested sampling algorithm : Wikipedia (English edition)
Nested sampling algorithm

The nested sampling algorithm is a computational approach to the problem of comparing models in Bayesian statistics, developed in 2004 by physicist John Skilling.
==Background==

Bayes' theorem can be applied to a pair of competing models M1 and M2 for data D, one of which may be true (though which one is unknown) but which cannot both be true simultaneously. The posterior probability for M1 may be calculated as follows:
:
\begin{align}
P(M_1|D) &= \frac{P(D|M_1)\, P(M_1)}{P(D)} \\
&= \frac{P(D|M_1)\, P(M_1)}{P(D|M_1)\, P(M_1) + P(D|M_2)\, P(M_2)} \\
&= \frac{1}{1 + \frac{P(D|M_2)}{P(D|M_1)} \frac{P(M_2)}{P(M_1)}}
\end{align}

Given no a priori information in favor of M1 or M2, it is reasonable to assign prior probabilities
P(M1)=P(M2)=1/2, so that P(M2)/P(M1)=1. The remaining Bayes factor P(D|M2)/P(D|M1)
is not so easy to evaluate since in general it requires marginalization of
nuisance parameters. Generally, M1 has a collection of parameters that can be
lumped together and called \theta, and M2 has its own vector of parameters
that may be of different dimensionality but is still referred to as \theta.
The marginalization for M1 is
: P(D|M1) = \int d \theta P(D|\theta,M1) P(\theta|M1)
and likewise for M2. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution P(\theta|D,M1). It is an alternative to methods from the Bayesian literature such as bridge sampling and defensive importance sampling.
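For intuition, the marginalization integral is simply the prior expectation of the likelihood, so in low dimensions it can be approximated by plain Monte Carlo over the prior. A sketch under assumed toy distributions (the Gaussian model, the observation x, and the sample size are all my own illustrative choices):

```python
import math
import random

random.seed(0)

# Assumed toy model (not from the article): data D is one observation
# x = 1.0, likelihood P(D|theta, M) = N(x; theta, 1), prior P(theta|M) = N(0, 1).

def likelihood(theta, x=1.0):
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2.0 * math.pi)

# Naive Monte Carlo marginalization: P(D|M) = E_prior[P(D|theta, M)],
# so average the likelihood over draws from the prior.
n = 200_000
z_mc = sum(likelihood(random.gauss(0.0, 1.0)) for _ in range(n)) / n

# For this conjugate toy the integral has a closed form:
# P(D|M) = N(x; 0, 2) = exp(-x^2/4) / sqrt(4*pi).
z_exact = math.exp(-0.25) / math.sqrt(4.0 * math.pi)
print(z_mc, z_exact)
```

This naive estimator degrades badly when the likelihood occupies only a tiny fraction of the prior mass, as in high dimensions or with sharply peaked likelihoods; that is the regime nested sampling targets.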
Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability density Z = P(D|M), where M is M1 or M2:
 Start with N points \theta_1, ..., \theta_N sampled from the prior.
 Z := 0;  X_0 := 1;
 for i = 1 to j do            % The number of iterations j is chosen by guesswork.
     L_i := \min(current likelihood values of the points);
     X_i := \exp(-i/N);
     w_i := X_{i-1} - X_i;
     Z := Z + L_i * w_i;
     Save the point with least likelihood as a sample point with weight w_i.
     Update the point with least likelihood with some Markov chain Monte Carlo
     steps according to the prior, accepting only steps that keep the
     likelihood above L_i.
 end
 return Z;
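The loop above can be sketched as runnable code on a 1-D toy problem. The uniform prior, Gaussian likelihood, and the rejection-sampling stand-in for the MCMC replacement step are all assumptions for illustration, not part of the algorithm as stated:

```python
import math
import random

random.seed(1)

# Assumed toy problem: prior P(theta|M) = Uniform(-5, 5), likelihood
# P(D|theta, M) = N(theta; 0, 1), so the true evidence is
# Z = (1/10) * integral of N(0, 1) over [-5, 5], which is ~0.1.

def log_like(theta):
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

N, iters = 100, 600
points = [random.uniform(-5.0, 5.0) for _ in range(N)]  # N draws from the prior
Z, X_prev = 0.0, 1.0
for i in range(1, iters + 1):
    worst = min(range(N), key=lambda k: log_like(points[k]))
    L_i = math.exp(log_like(points[worst]))
    X_i = math.exp(-i / N)         # estimated prior mass still enclosed
    Z += L_i * (X_prev - X_i)      # w_i = X_{i-1} - X_i
    X_prev = X_i
    # Replace the worst point with a fresh prior draw beating its likelihood.
    # Plain rejection sampling stands in for the MCMC step described in the
    # text; it is only viable because this toy problem is one-dimensional.
    threshold = log_like(points[worst])
    while True:
        cand = random.uniform(-5.0, 5.0)
        if log_like(cand) > threshold:
            points[worst] = cand
            break

# Add the mass still inside the final contour so the truncation bias is small.
Z += X_prev * sum(math.exp(log_like(p)) for p in points) / N
print(Z)  # should land near the true evidence, ~0.1
```

The statistical error of the estimate shrinks as the number of live points N grows, at the cost of more likelihood evaluations per unit of compressed prior mass.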
At each iteration, X_i is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than L_i. The weight factor w_i is an estimate of the amount of prior mass that lies between the two nested hypersurfaces \{\theta : P(D|\theta,M) = L_{i-1}\} and \{\theta : P(D|\theta,M) = L_i\}. The update step Z := Z + L_i * w_i computes the sum over i of L_i * w_i to numerically approximate the integral
:
\begin{align}
P(D|M) &= \int P(D|\theta,M)\, P(\theta|M)\, d\theta \\
&= \int P(D|\theta,M)\, dP(\theta|M)
\end{align}

The idea is to chop up the range of f(\theta) = P(D|\theta,M) and estimate, for each interval [f(\theta_{i-1}), f(\theta_i)], how likely it is a priori that a randomly chosen \theta would map to this interval. This can be thought of as a Bayesian's way to numerically implement Lebesgue integration.
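That Lebesgue-style view can be made concrete: choose a grid of likelihood levels, estimate the prior mass lying above each level, and sum level spacing times mass. A sketch under assumed toy choices (the prior, the likelihood, and the level count are mine):

```python
import bisect
import math
import random

random.seed(2)

# Assumed toy setup (not from the article): prior Uniform(-5, 5), and
# f(theta) = P(D|theta, M) taken as the N(0, 1) density, so the exact
# integral of f against the prior is ~0.1.

def f(theta):
    return math.exp(-0.5 * theta * theta) / math.sqrt(2.0 * math.pi)

# Monte Carlo sample of f(theta) under the prior, sorted for fast counting.
fs = sorted(f(random.uniform(-5.0, 5.0)) for _ in range(100_000))
n = len(fs)

# Lebesgue sum: chop [0, f_max] into K levels f_k and accumulate
# (f_k - f_{k-1}) * Pr_prior[f(theta) > f_k], i.e. slab height times the
# prior measure of the region where f exceeds that height.
f_max = 1.0 / math.sqrt(2.0 * math.pi)
K = 1000
Z, prev = 0.0, 0.0
for k in range(1, K + 1):
    f_k = f_max * k / K
    mass = (n - bisect.bisect_right(fs, f_k)) / n  # estimated Pr[f > f_k]
    Z += (f_k - prev) * mass
    prev = f_k
print(Z)  # close to 0.1, the integral of f against the prior
```

Nested sampling replaces the fixed grid of levels with the adaptively chosen contours L_i, so effort concentrates where the likelihood actually changes.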
